Conversation
Notifying subscribers in CODENOTIFY files for diff aedcaf7...263717f.
LGTM, please add keegan or somebody from cloud as a reviewer. endpoint.go is used to discover zoekt service endpoints so it's rather important :-)
// namespace returns the namespace the pod is currently running in
// this is done because the k8s client we previously used set the namespace
// when the client was created, the official k8s client does not
func namespace() string {
Can you please rewrite this a little to log the error if it happens?
Rewritten
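For illustration, here is a minimal sketch of the kind of change being asked for, assuming the standard in-cluster service account namespace file; this is not the exact code that landed in the PR.

```go
package main

import (
	"log"
	"os"
	"strings"
)

// namespace returns the namespace the pod is currently running in.
// On failure it logs the error rather than silently falling back to "default".
func namespace() string {
	const nsFile = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
	data, err := os.ReadFile(nsFile)
	if err != nil {
		log.Printf("warning: falling back to namespace %q: reading %s: %v", "default", nsFile, err)
		return "default"
	}
	if ns := strings.TrimSpace(string(data)); ns != "" {
		return ns
	}
	return "default"
}
```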
Codecov Report
@@ Coverage Diff @@
## main #15486 +/- ##
==========================================
+ Coverage 50.89% 52.47% +1.57%
==========================================
Files 1736 1614 -122
Lines 86759 80634 -6125
Branches 7874 7168 -706
==========================================
- Hits 44156 42311 -1845
+ Misses 38711 34617 -4094
+ Partials 3892 3706 -186
*This pull request uses carry forward flags.*
LGTM
I'll review the code soon, but at a high level this pulls in a crap load of transitive dependencies. We used to use client-go, and constantly ran into dependency issues due to how much it depends on and how those dependencies change. Why do you need client-go? Does this simple client not solve it for you?
Approving. Some comments inline. This looks like it works; my main concern is that client-go is a seven tonne gorilla that I'd rather avoid in our dependency tree.
@@ -102,21 +85,23 @@ func (cs *clusterScanner) runEventLoop() {
// watchEndpointEvents uses the k8s watch API operation to watch for endpoint events. Spins forever unless an error
// occurs that would necessitate creating a new watcher. The caller will then call again creating the new watcher.
func (cs *clusterScanner) watchEndpointEvents() (bool, error) {
	watcher, err := cs.client.Watch(context.Background(), cs.client.Namespace(), new(corev1.Endpoints))

// TODO(Dax): Rewrite this to use NewSharedInformerFactory from k8s/client-go
you may have to look at older versions of this repo, but I believe we used to use informer back when this package used client-go.
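For reference, the informer-based pattern being recalled looks roughly like the sketch below with current client-go. It assumes in-cluster config, and the event handler bodies are placeholders rather than code from the older version of this repo.

```go
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// watchEndpointsWithInformer wires an Endpoints informer to event handlers.
// The shared informer factory keeps a local cache and resyncs it periodically.
func watchEndpointsWithInformer(stop <-chan struct{}) error {
	config, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Core().V1().Endpoints().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if eps, ok := obj.(*corev1.Endpoints); ok {
				_ = eps // update the endpoint map here
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if eps, ok := newObj.(*corev1.Endpoints); ok {
				_ = eps // update the endpoint map here
			}
		},
		DeleteFunc: func(obj interface{}) {
			// remove the corresponding endpoints from the map here
		},
	})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
	return nil
}
```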
for {
	var eps corev1.Endpoints
	eventType, err := watcher.Next(&eps)
	event := <-watcher.ResultChan()
you should be able to do `for event := range watcher.ResultChan()`
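As a sketch, the suggested shape looks like this; the update callback is a placeholder, not code from this PR.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/watch"
)

// consume ranges over the watch result channel; the loop ends when the
// channel is closed, at which point the caller can re-establish the watch.
func consume(watcher watch.Interface, update func(*corev1.Endpoints)) {
	for event := range watcher.ResultChan() {
		eps, ok := event.Object.(*corev1.Endpoints)
		if !ok {
			continue // unexpected object type; skip or surface an error
		}
		update(eps)
	}
}
```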
defer watcher.Close()
func inform(client v1.EndpointsInterface, m *Map, u *k8sURL) error {

// TODO(Dax): We shouldn't use watch directly, use an informer here
FYI if watch is simpler, we should just use that. I think Informer may be simple, I can't remember exactly :) But Informer is designed to help scale / simplify access, but we only care about simplicity not scale in this case.
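For comparison, the plain watch path with client-go looks roughly like this sketch; the namespace argument and label selector are illustrative, not taken from this PR.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchEndpoints opens a single server-side watch on Endpoints in one
// namespace, with no informer caching, and processes events until the
// watch channel is closed.
func watchEndpoints(ctx context.Context, clientset *kubernetes.Clientset, ns string) error {
	watcher, err := clientset.CoreV1().Endpoints(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "app=indexed-search", // illustrative selector
	})
	if err != nil {
		return err
	}
	defer watcher.Stop()
	for event := range watcher.ResultChan() {
		if eps, ok := event.Object.(*corev1.Endpoints); ok {
			_ = eps // update the endpoint map here
		}
	}
	return nil // channel closed; caller decides whether to re-watch
}
```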
eventType, err := watcher.Next(&endpoints)
if err != nil {
	return errors.Wrap(err, "watcher.Next")
event := <-watcher.ResultChan()
same as the previous code snippet, you can for loop over the chan
e := event.Object
endpoints, ok := e.(*corev1.Endpoints)
if !ok {
	return errors.Wrap(err, "object from watcher is not an endpoint")
have you manually tested this? Should be fine, but this sort of runtime reflection can slip through the cracks.
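On that assertion, one stricter variant is sketched below: it builds a fresh error when the assertion fails instead of wrapping err, which can be nil at that point. The helper name is illustrative; errors.Errorf is from github.com/pkg/errors.

```go
package main

import (
	"github.com/pkg/errors"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// toEndpoints asserts that a watched object is an *corev1.Endpoints and
// reports the actual type when it is not.
func toEndpoints(obj runtime.Object) (*corev1.Endpoints, error) {
	endpoints, ok := obj.(*corev1.Endpoints)
	if !ok {
		return nil, errors.Errorf("object from watcher is not *corev1.Endpoints, got %T", obj)
	}
	return endpoints, nil
}
```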
return "default"
}
return ns
}
same comments from the other place this was copied from.
After your comment about this pulling in more dependencies, I checked the frontend binary size before and after this change:
This is a 72% increase in our binary size, which feels excessive for what is essentially a refactor to use a more well-documented client. I knew that client-go might have some issues with other dependencies, but there are no conflicts and the post-1.17.0 versions are fairly easy to update.
Co-authored-by: Keegan Carruthers-Smith <[email protected]>
Force-pushed from c0145db to 817d48d.
I also spent some time today trying to upgrade https://github.com/ericchiang/k8s but the protobuf generation no longer works.
Yeah I am guessing this has calmed down vs a few years ago.
@daxmc99 you wanna pair tomorrow to try and sort this out? I'm sure making our current dep work on modern protobuf will make others happy.
The cluster QA tests are currently running via a pure bash implementation. The upstream project has been archived though, so we should avoid new uses of `ericchiang/k8s` where possible.
> The cluster QA tests are currently running via a pure bash implementation.

Cool. So just using kubectl?

> The upstream project has been archived though so we should avoid new uses of `ericchiang/k8s` where possible.

Should we consider forking and maintaining it? It seems like not that much effort to be honest, and it's in our interests to keep it up to date. I imagine keeping it up to date will be easier than updating deps on the official k8s client :)
> Cool. So just using kubectl?

Correct.

> Should we consider forking and maintaining it?

I still hesitate to use a non-official client. I wonder if the way forward here is actually to reduce scope further, drop a client altogether, and build our own basic client (https://github.com/cilium/cilium/blob/master/pkg/k8s/client.go#L93:6). We intentionally use the k8s API for very little, and our current dependency actually blocks us from using newer versions of protobuf.
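For context, a very rough sketch of what such a basic client could look like, using only the standard library plus the standard in-cluster service account files and environment variables. The namespace and service name in the example path are illustrative, and appending `?watch=true` to such a path turns the request into a streaming watch.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// get issues one authenticated GET against the in-cluster API server,
// e.g. path = "/api/v1/namespaces/default/endpoints/indexed-search".
func get(path string) ([]byte, error) {
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"
	token, err := os.ReadFile(saDir + "/token")
	if err != nil {
		return nil, err
	}
	caCert, err := os.ReadFile(saDir + "/ca.crt")
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caCert)
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	req, err := http.NewRequest("GET", fmt.Sprintf("https://%s:%s%s", host, port, path), nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+string(token))
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```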
> Should we consider forking and maintaining it?

> I still hesitate to use a non-official client. I wonder if the way forward here is actually to reduce scope further and drop a client altogether and build our own basic client

I'm all for doing that. Our use of the API is super tiny: it's watch and get on endpoints, and I think maybe a get on svc. That's it. In fact, I learnt how to do what I wanted by running kubectl in verbose mode anyway, which just output the URLs to hit. Sounds like fun :)
👋 I'm gonna pick this up to resolve conflicts & merge, as it's blocking some work for secret encryption. The GCloud KMS library depends on a newer protobuf version and won't compile. I've pulled these changes into my local branch and everything looks fine. Context: https://sourcegraph.slack.com/archives/CHPC7UX16/p1613123508259400
Fixes https://github.com/sourcegraph/sourcegraph/issues/11804
This is needed as part of https://github.com/sourcegraph/sourcegraph/issues/13878, since without this we cannot use a current version of client-go (the client we use today is ~2 years old).